Limited Communications Distributed Optimization via Deep Unfolded Distributed ADMM
Distributed optimization is a fundamental framework for collaborative
inference and decision making in decentralized multi-agent systems. The
operation is modeled as the joint minimization of a shared objective which
typically depends on observations gathered locally by each agent. Distributed
optimization algorithms, such as the widely used D-ADMM, tackle this task by
iteratively combining local computations and message exchanges. One of the main
challenges associated with distributed optimization, and particularly with
D-ADMM, is that it requires a large number of communications, i.e., messages
exchanged between the agents, to reach consensus. This can make D-ADMM costly
in power, latency, and channel resources. In this work we propose unfolded
D-ADMM, which follows the emerging deep unfolding methodology to enable D-ADMM
to operate reliably with a predefined and small number of messages exchanged by
each agent. Unfolded D-ADMM fully preserves the operation of D-ADMM, while
leveraging data to tune the hyperparameters of each iteration of the algorithm.
These hyperparameters can either be agent-specific, aiming at achieving the
best performance within a fixed number of iterations over a given network, or
shared among the agents, enabling learning to distributedly optimize over
different networks. For both settings, our unfolded D-ADMM operates with
limited communications, while preserving the interpretability and flexibility
of the original D-ADMM algorithm. We specialize unfolded D-ADMM for two
representative settings: a distributed estimation task, considering a sparse
recovery setup, and a distributed learning scenario, where multiple agents
collaborate in learning a machine learning model. Our numerical results
demonstrate that the proposed approach dramatically reduces the number of
communications utilized by D-ADMM, without compromising its performance.
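A minimal PyTorch sketch of the unfolding idea for a distributed sparse-recovery (LASSO-type) setup is given below. It is an illustrative consensus-ADMM-style iteration rather than the authors' exact D-ADMM variant; the class name UnfoldedDADMM, the neighbor-averaging step, and all dimensions are assumptions. The key point is that the per-iteration penalty rho_k and threshold lam_k become trainable parameters while the algorithmic structure of the iterations is preserved.

```python
# Illustrative sketch only, not the authors' code: a fixed number of
# consensus-ADMM-style iterations with learnable per-iteration hyperparameters.
import torch
import torch.nn as nn


def soft_threshold(v, lam):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return torch.sign(v) * torch.clamp(v.abs() - lam, min=0.0)


class UnfoldedDADMM(nn.Module):
    """Unfolded consensus-ADMM with learnable penalty rho_k and threshold lam_k."""

    def __init__(self, num_iters):
        super().__init__()
        # Hyperparameters shared across agents, one pair per iteration;
        # a (num_iters, num_agents) shape would give the agent-specific variant.
        self.rho = nn.Parameter(torch.ones(num_iters))
        self.lam = nn.Parameter(0.1 * torch.ones(num_iters))
        self.num_iters = num_iters

    def forward(self, A, y, adjacency):
        """A: (agents, m, d) local dictionaries, y: (agents, m) local
        observations, adjacency: (agents, agents) 0/1 graph with self-loops."""
        agents, _, d = A.shape
        x = torch.zeros(agents, d)   # local primal estimates
        z = torch.zeros(agents, d)   # local consensus variables
        u = torch.zeros(agents, d)   # scaled dual variables
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
        for k in range(self.num_iters):
            rho, lam = self.rho[k].abs(), self.lam[k].abs()
            # Local computation: ridge-regularized least squares (x-update).
            x = torch.stack([
                torch.linalg.solve(A[i].T @ A[i] + rho * torch.eye(d),
                                   A[i].T @ y[i] + rho * (z[i] - u[i]))
                for i in range(agents)
            ])
            # Message exchange: neighbor averaging, then a sparsifying
            # proximal step (z-update) and the dual update.
            z = soft_threshold((adjacency @ (x + u)) / deg, lam)
            u = u + x - z
        return z

# Training would unroll these num_iters steps and minimize, e.g., the MSE
# between z and the true sparse signal over a data set of (A, y) pairs.
```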
Asymptotic Task-Based Quantization with Application to Massive MIMO
Quantizers take part in nearly every digital signal processing system which
operates on physical signals. They are commonly designed to accurately
represent the underlying signal, regardless of the specific task to be
performed on the quantized data. In systems working with high-dimensional
signals, such as massive multiple-input multiple-output (MIMO) systems, it is
beneficial to utilize low-resolution quantizers, due to cost, power, and memory
constraints. In this work we study quantization of high-dimensional inputs,
aiming at improving performance under resolution constraints by accounting for
the system task in the quantizer design. We focus on the task of recovering a
desired signal statistically related to the high-dimensional input, and analyze
two quantization approaches: We first consider vector quantization, which is
typically computationally infeasible, and characterize the optimal performance
achievable with this approach. Next, we focus on practical systems which
utilize hardware-limited scalar uniform analog-to-digital converters (ADCs),
and design a task-based quantizer under this model. The resulting system
accounts for the task by linearly combining the observed signal into a
lower-dimensional representation prior to quantization. We then apply our proposed technique to
channel estimation in massive MIMO networks. Our results demonstrate that a
system utilizing low-resolution scalar ADCs can approach the optimal channel
estimation performance by properly accounting for the task in the system
design.
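The pipeline described above (analog linear combining to the task dimension, followed by low-resolution scalar uniform ADCs and digital processing) can be sketched in a few lines of Python. This is an illustrative toy example with a randomly drawn linear Gaussian model; the combining matrix here is a pseudo-inverse stand-in rather than the task-optimized matrix derived in the paper, and the dynamic range, resolution, and dimensions are assumptions.

```python
# Illustrative sketch only: analog combining -> scalar uniform ADC -> digital estimate.
import numpy as np

rng = np.random.default_rng(0)

n, p, bits = 64, 8, 3          # input dim, task (reduced) dim, ADC resolution
levels = 2 ** bits

# Linear Gaussian toy model: desired signal s, observation x = H s + noise.
H = rng.standard_normal((n, p)) / np.sqrt(n)
s = rng.standard_normal(p)
x = H @ s + 0.1 * rng.standard_normal(n)

# 1) Analog combining: reduce the n-dimensional input to p dimensions
#    before the ADCs (the task-aware step; a pseudo-inverse stand-in here).
A = np.linalg.pinv(H)           # (p, n)
z = A @ x

# 2) Scalar uniform ADCs: each of the p combined samples is quantized
#    independently with `bits` bits over an assumed dynamic range.
dyn = 3.0
step = 2 * dyn / levels
z_q = np.clip(np.round((z + dyn) / step), 0, levels - 1) * step - dyn + step / 2

# 3) Digital processing: linear estimate of s from the quantized samples
#    (identity here, since combining already matched the task dimension).
s_hat = z_q
print("task MSE:", np.mean((s - s_hat) ** 2))
```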
Adaptive KalmanNet: Data-Driven Kalman Filter with Fast Adaptation
Combining the classical Kalman filter (KF) with a deep neural network (DNN)
enables tracking in partially known state space (SS) models. A major limitation
of current DNN-aided designs stems from the need to train them to filter data
originating from a specific distribution and underlying SS model. Consequently,
changes in the model parameters may require lengthy retraining. While the KF
adapts through parameter tuning, the black-box nature of DNNs makes identifying
tunable components difficult. Hence, we propose Adaptive KalmanNet (AKNet), a
DNN-aided KF that can adapt to changes in the SS model without retraining.
Inspired by recent advances in large language model fine-tuning paradigms,
AKNet uses a compact hypernetwork to generate context-dependent modulation
weights. Numerical evaluation shows that AKNet provides consistent state
estimation performance across a continuous range of noise distributions, even
when trained using data from only a limited set of noise settings.
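A minimal PyTorch sketch of the hypernetwork-based modulation idea is given below. It is not the AKNet architecture: the main network (here a small Kalman-gain correction DNN), the hypernetwork sizes, the context descriptor, and all names are illustrative assumptions. The point is that a compact hypernetwork maps a context (e.g., an estimated noise level) to modulation weights that rescale the main network's features, so adapting to new noise settings requires only a new context rather than retraining.

```python
# Illustrative sketch only (not the AKNet architecture): a compact hypernetwork
# generating context-dependent modulation weights for a DNN-aided Kalman gain.
import torch
import torch.nn as nn


class GainNet(nn.Module):
    """DNN mapping an innovation-style feature to a Kalman-gain correction."""
    def __init__(self, feat_dim, hidden_dim, out_dim):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, feat, modulation):
        h = torch.relu(self.fc1(feat))
        h = h * modulation            # context-dependent modulation weights
        return self.fc2(h)


class HyperNet(nn.Module):
    """Compact hypernetwork: context -> modulation weights for GainNet."""
    def __init__(self, ctx_dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ctx_dim, 16), nn.ReLU(), nn.Linear(16, hidden_dim)
        )

    def forward(self, ctx):
        return torch.sigmoid(self.net(ctx)) * 2.0   # keep weights positive


# Usage: only the modulation input changes with the operating conditions;
# the main network's weights stay fixed after training.
gain_net, hyper_net = GainNet(4, 32, 2), HyperNet(1, 32)
context = torch.tensor([[0.5]])       # e.g., log of an estimated noise variance
feature = torch.randn(1, 4)           # e.g., normalized innovation features
gain_update = gain_net(feature, hyper_net(context))
```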
- …